Global and uniform convergence of subspace correction methods for some convex optimization problems

Authors

  • Xue-Cheng Tai
  • Jinchao Xu
Abstract

This paper gives some global and uniform convergence estimates for a class of subspace correction (based on space decomposition) iterative methods applied to some unconstrained convex optimization problems. Some multigrid and domain decomposition methods are also discussed as special examples for solving some nonlinear elliptic boundary value problems.
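For concreteness, the kind of iteration the paper analyzes can be illustrated on the quadratic model problem F(u) = ½ uᵀAu − bᵀu with a symmetric positive definite A. The sketch below is a minimal illustration under that assumption, not the paper's algorithm or notation; the overlapping coordinate-block decomposition, the 1-D Laplacian test matrix, and the function names are invented for the example.

```python
import numpy as np

def successive_subspace_correction(A, b, blocks, u0, sweeps=100):
    """Minimize F(u) = 0.5*u^T A u - b^T u by successive subspace
    correction: in each step, F is minimized exactly over one subspace
    spanned by a block of coordinates (a block Gauss-Seidel sweep)."""
    u = u0.copy()
    for _ in range(sweeps):
        for idx in blocks:
            # Restricting F(u + e) to the subspace gives the block
            # optimality system A[idx, idx] e = (b - A u)[idx].
            r = b - A @ u
            e = np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
            u[idx] += e
    return u

# Example: 1-D Laplacian with overlapping two-point subspaces.
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
blocks = [list(range(i, min(i + 2, n))) for i in range(n)]
u = successive_subspace_correction(A, b, blocks, np.zeros(n))
print(np.linalg.norm(A @ u - b))  # residual shrinks toward zero
```

The same loop structure carries over to the general convex case, with the block solve replaced by an exact or inexact minimization of F over each subspace.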

Similar resources

Adaptive Monotone Multigrid Methods for Some Non-Smooth Optimization Problems

We consider the fast solution of non-smooth optimization problems as they arise, for example, from the approximation of elliptic free boundary problems of obstacle or Stefan type. Combining well-known concepts of successive subspace correction methods with convex analysis, we derive a new class of multigrid methods which are globally convergent and have logarithmic bounds on the asymptotic convergence rates…

Rate of convergence for some constraint decomposition methods for nonlinear variational inequalities

Some general subspace correction algorithms are proposed for convex optimization problems over a convex constraint set. One nontrivial application of the algorithms is the solution of obstacle problems by multilevel domain decomposition and multigrid methods. For domain decomposition and multigrid methods, the rate of convergence of the algorithms for obstacle problems is of the…
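As a point of reference for the constrained setting (and not the multilevel algorithm of this paper), the simplest constrained subspace correction method is projected Gauss-Seidel for a discretized obstacle problem: minimize ½ uᵀAu − bᵀu subject to u ≥ ψ, with each coordinate direction as a subspace. A minimal sketch, assuming A is symmetric positive definite; the function name is illustrative.

```python
import numpy as np

def projected_gauss_seidel(A, b, psi, u0, sweeps=200):
    """Minimize 0.5*u^T A u - b^T u subject to u >= psi by exact
    minimization over one coordinate at a time, followed by projection
    onto the constraint set (the simplest constrained subspace
    correction method)."""
    u = u0.copy()
    for _ in range(sweeps):
        for i in range(len(b)):
            # Unconstrained 1-D minimizer along coordinate i ...
            u[i] = (b[i] - A[i, :] @ u + A[i, i] * u[i]) / A[i, i]
            # ... projected back onto the obstacle constraint.
            u[i] = max(u[i], psi[i])
    return u
```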

An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems

Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
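The abstract is cut off before the model's dynamics are stated. As background only, a classical one-layer projection neural network (not necessarily the non-penalty model proposed in this paper) solves constrained convex problems by integrating the dynamics dx/dt = −x + P(x − α∇f(x)), where P projects onto the constraint set; its fixed points are exactly the constrained minimizers. A minimal sketch with an invented box-constrained example:

```python
import numpy as np

def projection_network(grad_f, project, x0, alpha=0.1, dt=0.05, steps=2000):
    """Forward-Euler integration of the projection network
    dx/dt = -x + P(x - alpha * grad_f(x)); fixed points satisfy
    x = P(x - alpha * grad_f(x)), the optimality condition for
    minimizing f over the constraint set."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (project(x - alpha * grad_f(x)) - x)
    return x

# Example: minimize ||x - c||^2 over the box [0, 1]^2.
c = np.array([1.5, -0.5])
grad_f = lambda x: 2.0 * (x - c)
project = lambda x: np.clip(x, 0.0, 1.0)
print(projection_network(grad_f, project, np.zeros(2)))  # -> approx. [1, 0]
```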

Modifying the line search formula in the BFGS method to achieve global convergence

On the Behavior of Damped Quasi-Newton Methods for Unconstrained Optimization

We consider a family of damped quasi-Newton methods for solving unconstrained optimization problems. This family resembles Broyden's family with line searches, except that the change in gradients is replaced by a certain hybrid vector before updating the current Hessian approximation. This damping technique modifies the Hessian approximations so that they remain sufficiently positive definite…
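The best-known instance of this damping idea is Powell's damped BFGS update, in which the gradient change y is replaced by the hybrid vector r = θy + (1 − θ)Bs, with θ chosen so that sᵀr stays a fixed fraction of sᵀBs. The sketch below shows that classical update, offered as a reference point rather than the specific family studied in the paper.

```python
import numpy as np

def damped_bfgs_update(B, s, y, eta=0.2):
    """Powell's damped BFGS update. s is the step, y the gradient
    change, B the current Hessian approximation. Replacing y by the
    hybrid vector r guarantees s^T r >= eta * s^T B s > 0, so the
    updated matrix stays positive definite."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= eta * sBs:
        theta = 1.0  # curvature condition holds: ordinary BFGS update
    else:
        theta = (1.0 - eta) * sBs / (sBs - sy)  # damp toward B s
    r = theta * y + (1.0 - theta) * Bs
    # Standard BFGS formula with y replaced by r.
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)
```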

Journal:
  • Math. Comput.

Volume 71, Issue –

Pages –

Publication date: 2002